In model-free deep reinforcement learning (RL) algorithms, using noisy value estimates to supervise policy evaluation and optimization is detrimental to sample efficiency. Because this noise is heteroscedastic, its effects can be mitigated with uncertainty-based weights during optimization. Previous methods rely on sampled ensembles, which do not capture all aspects of uncertainty. We provide a systematic analysis of the sources of uncertainty in the noisy supervision that occurs in RL, and introduce inverse-variance RL, a Bayesian framework that combines probabilistic ensembles and batch inverse-variance weighting. We propose a method in which two complementary uncertainty estimation methods account for both the Q-value and the stochasticity of the environment, to better mitigate the negative impact of noisy supervision. Our results show significant improvements in sample efficiency on discrete and continuous control tasks.
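The core weighting scheme can be sketched in a few lines. This is a minimal illustration of batch inverse-variance weighting of squared TD errors, not the paper's exact estimator; the function names and toy values are illustrative.

```python
import numpy as np

def inverse_variance_weights(target_vars, eps=1e-8):
    """Batch inverse-variance weights: targets whose value estimates are
    noisier (higher variance) contribute less to the regression loss."""
    w = 1.0 / (target_vars + eps)
    return w / w.sum()  # normalize weights to sum to 1

def weighted_td_loss(q_pred, q_target, target_vars):
    """Inverse-variance-weighted squared TD error over a batch."""
    w = inverse_variance_weights(target_vars)
    return float(np.sum(w * (q_pred - q_target) ** 2))

# Toy batch: per-sample target means and variances, e.g. from a
# probabilistic ensemble of Q-heads (values are illustrative).
q_pred = np.array([1.0, 2.0, 3.0])
q_target = np.array([1.5, 2.5, 2.0])
target_vars = np.array([0.1, 0.1, 10.0])  # third target is very noisy
loss = weighted_td_loss(q_pred, q_target, target_vars)
```

The noisy third sample receives a much smaller weight, so its large error barely moves the loss.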
translated by Google Translate
We present NusaCrowd, a collaborative initiative to collect and unite existing resources for Indonesian languages, including opening access to previously non-public resources. Through this initiative, we have brought together 137 datasets and 117 standardized data loaders. The quality of the datasets has been assessed manually and automatically, and their effectiveness has been demonstrated in multiple experiments. NusaCrowd's data collection enables the creation of the first zero-shot benchmarks for natural language understanding and generation in Indonesian and its local languages. Furthermore, NusaCrowd enables the creation of the first multilingual automatic speech recognition benchmark in Indonesian and its local languages. Our work is intended to help advance natural language processing research in under-represented languages.
Cloud computing holds the promise of reduced costs through economies of scale. To realize this promise, cloud computing vendors typically solve sequential resource allocation problems, where customer workloads are packed on shared hardware. Virtual machines (VM) form the foundation of modern cloud computing as they help logically abstract user compute from shared physical infrastructure. Traditionally, VM packing problems are solved by predicting demand, followed by a Model Predictive Control (MPC) optimization over a future horizon. We introduce an approximate formulation of an industrial VM packing problem as a mixed-integer linear program (MILP) with soft constraints parameterized by the predictions. Recently, predict-and-optimize (PnO) was proposed for end-to-end training of prediction models by back-propagating the cost of decisions through the optimization problem. However, PnO is unable to scale to the large prediction horizons prevalent in cloud computing. To tackle this issue, we propose the Predict-and-Critic (PnC) framework, which outperforms PnO with just a two-step horizon by leveraging reinforcement learning. PnC jointly trains a prediction model and a terminal Q function that approximates cost-to-go over a long horizon, by back-propagating the cost of decisions through the optimization problem \emph{and from the future}. The terminal Q function allows us to solve a much smaller two-step horizon optimization problem than the multi-step horizon necessary in PnO. We evaluate PnO and the PnC framework on two datasets, three workloads, and with disturbances not modeled in the optimization problem. We find that PnC significantly improves decision quality over PnO, even when the optimization problem is not a perfect representation of reality. We also find that hardening the soft constraints of the MILP and back-propagating through the constraints improves decision quality for both PnO and PnC.
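The shape of the PnC objective can be illustrated with a toy model. The real method solves an MILP and trains the terminal Q function by back-propagating through it; here `step_cost`, `terminal_q`, the single-host capacity, and the linear penalty coefficients are all stand-ins chosen for illustration.

```python
# Toy sketch of a short-horizon objective with a terminal critic:
# immediate packing cost over two steps plus a learned estimate of the
# cost-to-go beyond the horizon (all names and numbers are illustrative).

CAPACITY = 10.0

def step_cost(allocation, demand):
    # Penalize unmet demand heavily and idle capacity lightly (toy model).
    unmet = max(demand - allocation, 0.0)
    idle = max(allocation - demand, 0.0)
    return 5.0 * unmet + 1.0 * idle

def pnc_objective(allocations, predicted_demand, terminal_q):
    """Cost of a two-step horizon plus a terminal estimate of cost-to-go."""
    immediate = sum(step_cost(a, d) for a, d in zip(allocations, predicted_demand))
    return immediate + terminal_q(allocations[-1])

# Stand-in terminal critic: penalizes leaving too little headroom for
# future arrivals beyond the two-step horizon.
terminal_q = lambda a: 0.5 * max(a - 0.8 * CAPACITY, 0.0)

cost = pnc_objective([6.0, 7.0], [5.0, 7.0], terminal_q)
```

With the terminal term standing in for the long tail of the horizon, the optimizer only ever reasons over two steps, which is what makes the PnC formulation scale.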
Deep neural networks have emerged as the workhorse for a large section of robotics and control applications, especially as models for dynamical systems. Such data-driven models are in turn used for designing and verifying autonomous systems. This is particularly useful in modeling medical systems where data can be leveraged to individualize treatment. In safety-critical applications, it is important that the data-driven model is conformant to established knowledge from the natural sciences. Such knowledge is often available or can often be distilled into a (possibly black-box) model $M$, for instance, the unicycle model for an F1 racing car. In this light, we consider the following problem: given a model $M$ and a state transition dataset, we wish to best approximate the system model while remaining within a bounded distance of $M$. We propose a method to guarantee this conformance. Our first step is to distill the dataset into a few representative samples, called memories, using the idea of a growing neural gas. Next, using these memories we partition the state space into disjoint subsets and compute bounds that should be respected by the neural network when the input is drawn from a particular subset. This serves as a symbolic wrapper for guaranteed conformance. We argue theoretically that this only leads to a bounded increase in approximation error, which can be controlled by increasing the number of memories. We experimentally show that on three case studies (Car Model, Drones, and Artificial Pancreas), our constrained neurosymbolic models conform to specified $M$ models (each encoding various constraints) with order-of-magnitude improvements compared to the augmented Lagrangian and vanilla training methods.
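The symbolic wrapper can be sketched as a nearest-memory lookup followed by clamping. This is a minimal illustration under assumed 1-D states; the memory positions, per-region bounds, and the stand-in network are hypothetical, and the real method derives the bounds from $M$ rather than fixing them by hand.

```python
import numpy as np

# Sketch of the "symbolic wrapper": route each input to the region of its
# nearest memory, then clamp the network's prediction to that region's
# bounds so the model stays within a bounded distance of the reference
# model M (memories and bounds below are illustrative, not from the paper).

memories = np.array([[0.0], [5.0]])     # representative states ("memories")
bounds = [(-1.0, 1.0), (4.0, 6.0)]      # allowed output interval per region

def wrapped_predict(net, x):
    region = int(np.argmin(np.linalg.norm(memories - x, axis=1)))
    lo, hi = bounds[region]
    return float(np.clip(net(x), lo, hi))

net = lambda x: float(x[0]) * 2.0       # stand-in for a trained network
y = wrapped_predict(net, np.array([0.4]))   # raw 0.8, already within [-1, 1]
y2 = wrapped_predict(net, np.array([4.0]))  # raw 8.0, clamped to 6.0
```

Because the clamp is applied outside the network, conformance holds by construction regardless of how the network was trained.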
This paper studies audio-visual noise suppression for egocentric videos -- where the speaker is not captured in the video. Instead, potential noise sources are visible on screen, with the camera emulating the off-screen speaker's view of the outside world. This setting is different from prior work in audio-visual speech enhancement that relies on lip and facial visuals. In this paper, we first demonstrate that egocentric visual information is helpful for noise suppression. We compare object recognition and action classification based visual feature extractors, and investigate methods to align audio and visual representations. Then, we examine different fusion strategies for the aligned features, and locations within the noise suppression model to incorporate visual information. Experiments demonstrate that visual features are most helpful when used to generate additive correction masks. Finally, in order to ensure that the visual features are discriminative with respect to different noise types, we introduce a multi-task learning framework that jointly optimizes audio-visual noise suppression and video-based acoustic event detection. This proposed multi-task framework outperforms the audio-only baseline on all metrics, including a 0.16 PESQ improvement. Extensive ablations reveal the improved performance of the proposed model with multiple active distractors, over all noise types and across different SNRs.
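An additive correction mask of the kind found most helpful can be sketched as follows. This is a conceptual illustration only: the shapes, the logit-space fusion point, and the toy values are assumptions, not the paper's architecture.

```python
import numpy as np

# Sketch of fusing visual information as an additive correction on top of
# an audio-only suppression mask. The visual branch predicts a correction
# in logit space; a sigmoid squashes the fused result back into [0, 1].

def fuse_masks(audio_mask_logits, visual_correction):
    return 1.0 / (1.0 + np.exp(-(audio_mask_logits + visual_correction)))

audio_logits = np.array([[2.0, -2.0]])   # time x frequency, toy values
visual_corr = np.array([[0.0, -3.0]])    # visual evidence of a distractor
mask = fuse_masks(audio_logits, visual_corr)
enhanced = mask * np.array([[1.0, 1.0]])  # mask applied to noisy magnitudes
```

Here the visual evidence pushes the second time-frequency bin further toward suppression, while leaving the first bin untouched.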
Hate speech classification has been a long-standing problem in natural language processing. However, even with the many hate speech detection methods available, much hate speech is often overlooked because it is implicit in nature. Developing datasets to support the task of implicit hate speech classification comes with its own challenges: the difficulties lie in linguistic nuance, varying definitions of what constitutes hate speech, and the labor-intensive annotation process. This has led to a scarcity of data available for training and testing such systems, which in turn causes high-variance problems when parameter-heavy transformer-based models are used to tackle the task. In this paper, we explore various optimization and regularization techniques and develop a novel RoBERTa-based model that achieves state-of-the-art performance.
Inferring linear relationships is central to many empirical investigations. A measure of linear dependence should correctly evaluate the strength of the relationship and be meaningful for the population. Pearson's correlation coefficient (PCC), the \textit{de facto} measure of bivariate relationships, lacks both these aspects. The estimated strength $r$ may be wrong due to limited sample size and non-normality of the data. In the context of statistical significance testing, the misinterpretation of the $p$-value as a posterior probability leads to Type I errors, a general problem with significance testing that extends to PCC. Such errors are exacerbated when multiple hypotheses are tested simultaneously. To address these issues, we propose a machine-learning-based predictive data calibration method that, in essence, conditions the data on the expected linear relationship. Computing PCC on the calibrated data yields a calibrated $p$-value that can be interpreted as a posterior probability, together with a calibrated $r$ estimate, a desired outcome not provided by other methods. Furthermore, the ensuing independent interpretation of each test may remove the need for multiple-testing corrections. We provide empirical evidence favoring the proposed method using several simulations and an application to real-world data.
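For reference, a plain permutation test is one standard way to free the PCC $p$-value from the normality assumption. This is a baseline illustration only, not the paper's ML-based calibration method; the sample sizes, seeds, and iteration count are arbitrary.

```python
import numpy as np

# Baseline sketch: Pearson's r plus a permutation p-value, which avoids
# the normality assumption behind the usual analytic significance test.

def pearson_r(x, y):
    x = x - x.mean()
    y = y - y.mean()
    return float((x @ y) / np.sqrt((x @ x) * (y @ y)))

def permutation_p(x, y, n_perm=2000, seed=0):
    """Two-sided permutation p-value for the observed |r|."""
    rng = np.random.default_rng(seed)
    r_obs = abs(pearson_r(x, y))
    hits = sum(abs(pearson_r(x, rng.permutation(y))) >= r_obs
               for _ in range(n_perm))
    return (hits + 1) / (n_perm + 1)  # add-one smoothing avoids p = 0

rng = np.random.default_rng(1)
x = rng.normal(size=50)
y = 0.8 * x + 0.3 * rng.normal(size=50)   # strong linear relationship
r, p = pearson_r(x, y), permutation_p(x, y)
```

Even this baseline shows that a small permutation $p$-value is still a frequentist quantity; the paper's contribution is precisely to produce estimates interpretable as posterior probabilities instead.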
Up-directed rough sets were introduced and studied by the present author in earlier papers. In this research, she extends them in two different granular directions, with surprising algebraic semantics. The granules are based on the idea of generalized closure under up-directedness, which may be read as a form of weak consequence. This yields approximation operators that satisfy cautious monotony, while pi-groupoidal approximations (which additionally involve strategic choice and algebraic operators) have better properties. This research is primarily motivated by the distributed cognition perspective, real or virtual classroom learning contexts, and concept structures in student-centered learning. Rough clustering techniques for datasets that involve up-directed relations (as in the case of Sentinel project image data) are additionally proposed. This research is expected to see significant theoretical and practical applications in related domains.
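The basic scheme underlying any granular approximation operator can be sketched directly. This shows only classical relation-based lower and upper approximations; the up-directed and pi-groupoidal operators of this research add further structure on top of it, and the universe, relation, and target set below are illustrative.

```python
# Sketch: lower/upper approximations of a target set from the granules
# (here, successor neighborhoods) induced by a binary relation.

def neighborhood(relation, x):
    return {y for (a, y) in relation if a == x}

def lower_approx(relation, universe, target):
    # Elements whose whole granule lies inside the target.
    return {x for x in universe if neighborhood(relation, x) <= target}

def upper_approx(relation, universe, target):
    # Elements whose granule meets the target.
    return {x for x in universe if neighborhood(relation, x) & target}

U = {1, 2, 3, 4}
R = {(1, 1), (2, 1), (2, 2), (3, 3), (4, 3), (4, 4)}
T = {1}
lo, hi = lower_approx(R, U, T), upper_approx(R, U, T)
```

The gap between the two approximations is what rough clustering exploits: elements in the upper but not the lower approximation are the boundary cases.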
Single-subject mapping of resting-state brain functional activity to non-imaging phenotypes is a major goal of neuroimaging. The vast majority of learning approaches applied today rely either on static representations or on short-term temporal correlations. This is at odds with the dynamic nature of brain activity, which exhibits both short- and long-range dependencies. Furthermore, new sophisticated deep learning methods have been developed and validated on single tasks/datasets. Applying these models to studies with different targets typically requires exhaustive hyperparameter search, model engineering, and trial and error to obtain competitive results against simpler linear models. This in turn limits their adoption and hinders fair benchmarking in a rapidly developing research field. To this end, we propose fMRI-S4: a versatile deep learning model for classifying phenotypes and psychiatric disorders from the timecourses of resting-state fMRI scans. fMRI-S4 captures short- and long-range temporal dependencies in the signal using 1D convolutions and the recently introduced state-space model S4. The proposed architecture is lightweight, sample-efficient, and robust across tasks/datasets. We validate fMRI-S4 on the tasks of diagnosing major depressive disorder (MDD) and autism spectrum disorder (ASD), and of sex classification, on three multi-site rs-fMRI datasets. We demonstrate that fMRI-S4 can outperform existing methods on all three tasks and can be trained as a plug-and-play model without special hyperparameter tuning for each setting.
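The recurrence at the heart of S4 can be illustrated in miniature. S4 itself uses a special (HiPPO-based) parameterization and a fast convolutional view for training; the plain discrete state-space scan below, with hand-picked toy matrices, is for illustration only.

```python
import numpy as np

# Minimal discrete state-space model: x_{k+1} = A x_k + B u_k, y_k = C x_k.
# This is the building block behind S4, applied here to a toy impulse input.

def ssm_scan(A, B, C, u):
    x = np.zeros(A.shape[0])
    ys = []
    for u_k in u:
        x = A @ x + B * u_k      # state update
        ys.append(float(C @ x))  # readout
    return np.array(ys)

A = np.diag([0.9, 0.5])          # stable diagonal dynamics
B = np.array([1.0, 1.0])
C = np.array([0.5, 0.5])
y = ssm_scan(A, B, C, np.array([1.0, 0.0, 0.0]))
```

The slowly decaying mode (eigenvalue 0.9) is what lets such models retain information over long timescales, which is exactly the long-range dependency property fMRI-S4 relies on.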
What structure enables learners to discover classes from unlabeled data? Traditional approaches rely on feature-space similarity and heroic assumptions on the data. In this paper, we introduce unsupervised learning under Latent Label Shift (LLS), where we have access to unlabeled data from multiple domains such that the label marginals $p_d(y)$ can shift across domains but the class conditionals $p(\mathbf{x}|y)$ do not. This work instantiates a new principle for identifying classes: elements that shift together group together. For finite input spaces, we establish an isomorphism between LLS and topic modeling: inputs correspond to words, domains to documents, and labels to topics. Addressing continuous data, we prove that when each label's support contains a separable region, analogous to an anchor word, oracle access to $p(d|\mathbf{x})$ suffices to identify $p_d(y)$ and $p_d(y|\mathbf{x})$ up to permutation. Thus we introduce a practical algorithm that leverages domain-discriminative models as follows: (i) push examples through a domain discriminator $p(d|\mathbf{x})$; (ii) discretize the data by clustering examples in $p(d|\mathbf{x})$ space; (iii) perform non-negative matrix factorization on the discrete data; (iv) combine the recovered $p(y|d)$ with the discriminator outputs $p(d|\mathbf{x})$ to compute $p_d(y|\mathbf{x}) \;\forall d$. With semi-synthetic experiments, we show that our algorithm can leverage domain information to improve unsupervised classification methods. We reveal a failure mode of standard unsupervised classification methods when feature-space similarity does not indicate true groupings, and we demonstrate empirically that our method handles this case better. Our results establish a deep connection between distribution shift and topic modeling, opening promising lines for future work.
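The factorization step (iii) can be sketched with a small multiplicative-update NMF. The domain-by-cluster count matrix, the number of labels, and the iteration budget below are illustrative stand-ins for the outputs of steps (i)-(ii), and the row-normalized factor is only interpretable up to scale and label permutation.

```python
import numpy as np

# Toy sketch of step (iii): factorize a domain-by-cluster co-occurrence
# matrix V ~ W H with multiplicative-update NMF (Lee-Seung updates), then
# row-normalize W as a stand-in for the recovered per-domain label mix.

def nmf(V, k, n_iter=500, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.random((V.shape[0], k)) + 0.1
    H = rng.random((k, V.shape[1])) + 0.1
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)
    return W, H

# Hypothetical counts from step (ii): rows = domains, columns = clusters.
V = np.array([[8.0, 1.0, 1.0],
              [1.0, 8.0, 1.0]])
W, H = nmf(V, k=2)
p_y_given_d = W / W.sum(axis=1, keepdims=True)  # per-domain mix, up to permutation
```

Step (iv) would then combine these recovered marginals with the discriminator outputs $p(d|\mathbf{x})$ to label individual examples.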